Subgradient methods for convex minimization
Author
Abstract
Many optimization problems arising in various applications require minimization of an objective cost function that is convex but not differentiable. Such a minimization arises, for example, in model construction, system identification, neural networks, pattern classification, and various assignment, scheduling, and allocation problems. To solve convex but not differentiable problems, we have to employ special methods that can work in the absence of differentiability, while taking advantage of convexity and possibly other special structures that our minimization problem may possess. In this thesis, we propose and analyze some new methods that can solve convex (not necessarily differentiable) problems. In particular, we consider two classes of methods: incremental and variable metric.
Thesis Supervisor: Dimitri P. Bertsekas
Title: Professor
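As a minimal illustration of the incremental class of methods the thesis studies, the sketch below applies a basic incremental subgradient scheme to a sum of nondifferentiable convex components. The test problem, step-size rule, and all names are illustrative assumptions, not the thesis's actual algorithms.

```python
import numpy as np

def incremental_subgradient(A, b, steps=200, x0=None, seed=0):
    """Minimize f(x) = sum_i |a_i^T x - b_i| by cycling through the
    component functions and taking a subgradient step one component
    at a time (a basic incremental subgradient scheme; illustrative
    sketch only)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    f = lambda x: np.abs(A @ x - b).sum()
    best_x, best_f = x.copy(), f(x)
    for k in range(1, steps + 1):
        alpha = 1.0 / np.sqrt(k)          # diminishing step size
        for i in rng.permutation(m):      # one randomized pass over components
            r = A[i] @ x - b[i]
            g = np.sign(r) * A[i]         # subgradient of |a_i^T x - b_i|
            x = x - alpha * g
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

# Tiny usage example with synthetic data (the optimal value is 0 here):
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
b = A @ rng.standard_normal(5)
x_hat, f_hat = incremental_subgradient(A, b)
print(f_hat)  # best objective found; decreases toward 0 as steps grow
```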
Similar resources
Lecture 2: Subgradient Methods
In this lecture, we discuss first-order methods for the minimization of convex functions. We focus almost exclusively on subgradient-based methods, which are essentially universally applicable for convex optimization problems because they rely very little on the structure of the problem being solved. This leads to effective but slow algorithms in classical optimization problems; however, in la...
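For reference, the basic subgradient iteration for minimizing a convex function f, in its standard form (not quoted from these lecture notes), is

```latex
x_{k+1} = x_k - \alpha_k g_k, \qquad g_k \in \partial f(x_k),
\qquad \alpha_k > 0, \quad \sum_k \alpha_k = \infty, \quad \sum_k \alpha_k^2 < \infty,
```

where the step-size conditions are one common choice that guarantees convergence of the best iterate's objective value to the minimum.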
Selection Strategies in Projection Methods for Convex Minimization Problems
We propose a new projection method for nonsmooth convex minimization problems. We present a method of subgradient selection, which is based on the so-called residual selection model and is a generalization of the so-called obtuse cone model. We also present numerical results for some test problems and compare these results with some other convex nonsmooth minimization methods. The numerical re...
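The residual selection model itself is not reproduced in the snippet above; as a baseline for comparison, a plain projected subgradient method (here projecting onto the Euclidean unit ball, an assumed feasible set chosen for concreteness) can be sketched as follows.

```python
import numpy as np

def project_unit_ball(x):
    """Euclidean projection onto {x : ||x|| <= 1}."""
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

def projected_subgradient(f, subgrad, x0, steps=500):
    """Baseline projected subgradient method: step along a subgradient,
    then project back onto the feasible set. Illustrative sketch; the
    paper's subgradient-selection strategies are not reproduced here."""
    x = project_unit_ball(np.asarray(x0, dtype=float))
    best_x, best_f = x.copy(), f(x)
    for k in range(1, steps + 1):
        g = subgrad(x)
        x = project_unit_ball(x - g / np.sqrt(k))  # diminishing step size
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

# Example: minimize the nonsmooth f(x) = ||x - c||_1 over the unit ball.
c = np.array([2.0, -1.5, 0.5])
x_hat, f_hat = projected_subgradient(
    f=lambda x: np.abs(x - c).sum(),
    subgrad=lambda x: np.sign(x - c),
    x0=np.zeros(3),
)
```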
Incremental Subgradients for Constrained Convex Optimization: A Unified Framework and New Methods
We present a unifying framework for nonsmooth convex minimization bringing together ε-subgradient algorithms and methods for the convex feasibility problem. This development is a natural step for ε-subgradient methods in the direction of constrained optimization, since the Euclidean projection frequently required in such methods is replaced by an approximate projection, which is often easier to co...
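One classical way an approximate projection can be realized (a sketch of the general idea, not claimed to be this paper's construction) is the Polyak-style subgradient projection: for a feasible set written as {y : c(y) ≤ 0} with c convex, an infeasible point is projected onto the halfspace cut out by a subgradient of c, which contains the feasible set and is trivial to project onto.

```python
import numpy as np

def relaxed_projection(x, c, c_subgrad):
    """Approximate projection onto {y : c(y) <= 0} for convex c:
    if x is infeasible, project onto the halfspace
    {y : c(x) + g^T (y - x) <= 0}, where g is a subgradient of c at x.
    One step moves toward the set; it is not the exact projection."""
    cx = c(x)
    if cx <= 0.0:
        return x                      # already feasible
    g = c_subgrad(x)
    return x - (cx / (g @ g)) * g     # Polyak-style subgradient projection

# Example: the unit ball written as c(x) = ||x||^2 - 1 <= 0, gradient 2x.
x = np.array([3.0, 4.0])
y = relaxed_projection(x, lambda v: v @ v - 1.0, lambda v: 2.0 * v)
```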
Minimization of Nonsmooth Convex Functionals in Banach Spaces
We develop a unified framework for convergence analysis of subgradient and subgradient projection methods for minimization of nonsmooth convex functionals in Banach spaces. The important novel features of our analysis are that we neither assume that the functional is uniformly or strongly convex, nor use regularization techniques. Moreover, no boundedness assumptions are made on the level sets o...
The proximal point method revisited
In this short survey, I revisit the role of the proximal point method in large-scale optimization. I focus on three recent examples: a proximally guided subgradient method for weakly convex stochastic approximation, the prox-linear algorithm for minimizing compositions of convex functions and smooth maps, and Catalyst generic acceleration for regularized Empirical Risk Minimization.
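To make the proximal point iteration concrete, here is a small sketch for the one case where the prox map has a famous closed form: for f(x) = ||x||_1 it is componentwise soft-thresholding. The function names and the choice of f are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal map of f(x) = ||x||_1:
    argmin_x ||x||_1 + (1 / (2 * lam)) * ||x - v||^2,
    which is componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def proximal_point(x0, lam=0.5, steps=10):
    """Proximal point method x_{k+1} = prox_{lam * f}(x_k) for f = ||.||_1.
    Illustrative sketch; for this f the iterates shrink toward 0, the
    minimizer of ||x||_1."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = prox_l1(x, lam)
    return x

print(proximal_point(np.array([3.0, -1.2, 0.1])))  # approaches the zero vector
```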